Coordinate Descent for Variance-Component Models
Authors
Abstract
Variance-component models are an indispensable tool for statisticians wanting to capture both random and fixed model effects. They have applications in a wide range of scientific disciplines. While maximum likelihood estimation (MLE) is the most popular method for estimating variance-component parameters, it is numerically challenging for large data sets. In this article, we consider a class of coordinate descent (CD) algorithms for computing the MLE. We show that a basic implementation of CD is costly and does not easily satisfy the standard theoretical conditions for convergence. We instead propose two parameter-expanded versions of CD, called PX-CD and PXI-CD. These novel algorithms not only converge faster than existing competitors (the MM and EM algorithms) but are also more amenable to convergence analysis. PXI-CD is particularly well-suited to large data sets: as the scale of the problem increases, the performance gap between it and the current competitor methods widens.
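To make concrete what a "basic implementation" of CD means in this setting, here is a minimal sketch, assuming a zero-mean Gaussian model with covariance Σ = Σ_k σ²_k V_k. The function names and the bounded scalar line search are illustrative choices of ours, not the paper's PX-CD or PXI-CD algorithms:

```python
# A minimal sketch (NOT the paper's PX-CD/PXI-CD) of basic coordinate
# descent for the MLE of a variance-component model y ~ N(0, sum_k s[k]*V[k]).
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_lik(sigma2, V, y):
    """Gaussian negative log-likelihood with covariance sum_k sigma2[k]*V[k]."""
    Sigma = sum(s * Vk for s, Vk in zip(sigma2, V))
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + y @ np.linalg.solve(Sigma, y))

def cd_mle(V, y, iters=50):
    """Cyclic CD: optimize one variance component at a time."""
    sigma2 = np.ones(len(V))
    for _ in range(iters):
        for k in range(len(V)):
            # Scalar minimization over sigma2[k] >= 0. Every objective
            # evaluation refactorizes the full covariance matrix, which is
            # exactly what makes this basic implementation costly.
            res = minimize_scalar(
                lambda s: neg_log_lik(np.r_[sigma2[:k], s, sigma2[k+1:]], V, y),
                bounds=(1e-8, 1e4), method="bounded")
            sigma2[k] = res.x
    return sigma2
```

Each coordinate update here triggers a fresh factorization of the n-by-n covariance matrix, which makes the cost the abstract refers to visible; the parameter-expanded variants are motivated by avoiding precisely this bottleneck.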
Related Articles
Accelerated Variance Reduced Block Coordinate Descent
Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for highly accurate solutions to problems with a large number of samples in ultra-high-dimensional spaces. Existing algorithms lack at least one of these qualities, and are thus inefficient at handling such big-data challenges. ...
Penalized Bregman Divergence Estimation via Coordinate Descent
Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...
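To show the CD mechanics this snippet refers to, here is a minimal sketch of cyclic coordinate descent for the lasso objective (1/(2n))‖y − Xβ‖² + λ‖β‖₁. This is illustrative code under those assumptions, not Friedman et al.'s implementation:

```python
# Cyclic coordinate descent for the lasso: each coordinate update is a
# closed-form soft-thresholding step, which is why many coefficients
# come out exactly zero.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # per-coordinate curvature, precomputed
    r = y - X @ b                          # running residual
    for _ in range(iters):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j from the fit
            rho = X[:, j] @ r / n          # correlation with partial residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]            # add updated coordinate back
    return b
```

Maintaining the residual vector incrementally keeps each coordinate update at O(n) cost, which is the source of CD's computational advantage for this problem class.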
F. Variance Component Models
2. In the context of a balanced one-way random effects model where the εij's are N = nk i.i.d. N(0, σ²), εij − ε̄i. and ε̄i′. − ε̄.. are independent for all choices of i, j, and i′. Proof: By normality, it suffices to show that cov(εij − ε̄i., ε̄i′. − ε̄..) = 0 for all i, i′, and j. Case 1: i = i′. cov(εij − ε̄i., ε̄i. − ε̄..) = cov(εij, ε̄i.) − cov(εij, ε̄..) − cov(ε̄i., ε̄i.) + cov(ε̄i., ε̄..) = σ²/n − σ²/(nk) ...
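The snippet cuts off mid-computation. For completeness, here is a sketch of how the standard calculation finishes, with ε̄i. the group mean and ε̄.. the grand mean; the notation is reconstructed here, not quoted from the source:

```latex
% Completing the covariance calculation (standard one-way ANOVA result,
% assuming k groups of n observations each, so N = nk).
\begin{align*}
\text{Case } i = i':\;
\operatorname{cov}(\varepsilon_{ij} - \bar\varepsilon_{i\cdot},\,
                   \bar\varepsilon_{i\cdot} - \bar\varepsilon_{\cdot\cdot})
  &= \frac{\sigma^2}{n} - \frac{\sigma^2}{nk}
     - \frac{\sigma^2}{n} + \frac{\sigma^2}{nk} = 0, \\
\text{Case } i \neq i':\;
\operatorname{cov}(\varepsilon_{ij} - \bar\varepsilon_{i\cdot},\,
                   \bar\varepsilon_{i'\cdot} - \bar\varepsilon_{\cdot\cdot})
  &= 0 - \frac{\sigma^2}{nk} - 0 + \frac{\sigma^2}{nk} = 0.
\end{align*}
```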
Lasso Regularization Paths for NARMAX Models via Coordinate Descent
We propose a new algorithm for estimating NARMAX models with L1 regularization for models represented as a linear combination of basis functions. Due to the L1-norm penalty, the Lasso estimation tends to produce some coefficients that are exactly zero and hence gives interpretable models. The novelty of the contribution is the inclusion of error regressors in the Lasso estimation (which yields a...
Regularization Paths for Generalized Linear Models via Coordinate Descent.
We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems, while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression), and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path...
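The path strategy this snippet describes is straightforward to mimic: solve on a decreasing grid of penalties, warm-starting each fit from the previous solution. A hedged sketch using scikit-learn's coordinate-descent ElasticNet as a stand-in for the paper's own implementation (the data here are synthetic, purely for illustration):

```python
# Regularization path via warm-started coordinate descent:
# each fit on the grid starts from the previous solution, so only a few
# CD sweeps are typically needed per penalty value.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X[:, :5] @ rng.standard_normal(5) + 0.1 * rng.standard_normal(200)

model = ElasticNet(l1_ratio=0.9, warm_start=True, max_iter=10_000)
alphas = np.logspace(0, -3, 30)            # strong -> weak penalty
path = []
for a in alphas:
    model.set_params(alpha=a)
    model.fit(X, y)                        # warm start: reuses previous coef_
    path.append(model.coef_.copy())
print("nonzeros along the path:", [int((c != 0).sum()) for c in path])
```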
Journal
Journal title: Algorithms
Year: 2022
ISSN: 1999-4893
DOI: https://doi.org/10.3390/a15100354